
    Strategic Payments in Financial Networks

    In their seminal work on systemic risk in financial markets, Eisenberg and Noe [Larry Eisenberg and Thomas Noe, 2001] proposed and studied a model with n firms embedded into a network of debt relations. We analyze this model from a game-theoretic point of view. Every firm is a rational agent in a directed graph that has an incentive to allocate payments in order to clear as much of its debt as possible. Each edge is weighted and describes a liability between the firms. We consider several variants of the game that differ in the permissible payment strategies. We study the existence and computational complexity of pure Nash and strong equilibria, and we provide bounds on the (strong) prices of anarchy and stability for a natural notion of social welfare. Our results highlight the power of financial regulation - if payments of insolvent firms can be centrally assigned, a socially optimal strong equilibrium can be found in polynomial time. In contrast, worst-case strong equilibria can be a factor of Ω(n) away from optimal, and, in general, computing a best response is an NP-hard problem. For less permissive sets of strategies, we show that pure equilibria might not exist, and deciding their existence as well as computing them if they exist constitute NP-hard problems.
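
    For context on the underlying Eisenberg-Noe model, here is a minimal sketch (not the authors' code) that computes a clearing payment vector by fixed-point iteration on a toy liability matrix. The liability matrix L, the external assets e, and the proportional (pro-rata) payment rule are illustrative assumptions; the paper studies what happens when insolvent firms may instead allocate payments strategically.

        import numpy as np

        # Toy Eisenberg-Noe instance (illustrative values): L[i, j] is the liability
        # of firm i to firm j, and e[i] is firm i's external assets.
        L = np.array([[0.0, 2.0, 1.0],
                      [1.0, 0.0, 2.0],
                      [0.5, 0.5, 0.0]])
        e = np.array([1.0, 0.5, 0.2])

        p_bar = L.sum(axis=1)                                  # total nominal liabilities
        Pi = np.divide(L, p_bar[:, None], out=np.zeros_like(L),
                       where=p_bar[:, None] > 0)               # relative (pro-rata) liabilities

        # Fixed-point iteration for the clearing vector:
        #   p_i = min(p_bar_i, e_i + sum_j Pi[j, i] * p_j)
        p = p_bar.copy()
        for _ in range(10_000):
            p_new = np.minimum(p_bar, e + Pi.T @ p)
            if np.max(np.abs(p_new - p)) < 1e-12:
                break
            p = p_new

        print("clearing payments:", p)
        print("insolvent firms:  ", np.flatnonzero(p < p_bar - 1e-9))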

    Training Fully Connected Neural Networks is ∃ℝ-Complete

    We consider the algorithmic problem of finding the optimal weights and biases for a two-layer fully connected neural network to fit a given set of data points. This problem is known as empirical risk minimization in the machine learning community. We show that the problem is ∃ℝ-complete. This complexity class can be defined as the set of algorithmic problems that are polynomial-time equivalent to finding real roots of a polynomial with integer coefficients. Our results hold even if the following restrictions are all added simultaneously:
    • There are exactly two output neurons.
    • There are exactly two input neurons.
    • The data has only 13 different labels.
    • The number of hidden neurons is a constant fraction of the number of data points.
    • The target training error is zero.
    • The ReLU activation function is used.
    This shows that even very simple networks are difficult to train. The result offers an explanation (though far from a complete understanding) of why only gradient descent is widely successful in training neural networks in practice. We generalize a recent result by Abrahamsen, Kleist and Miltzow [NeurIPS 2021]. This result falls into a recent line of research that tries to show that a series of central algorithmic problems from widely different areas of computer science and mathematics are ∃ℝ-complete: this includes the art gallery problem [JACM/STOC 2018], geometric packing [FOCS 2020], covering polygons with convex polygons [FOCS 2021], and continuous constraint satisfaction problems [FOCS 2021]. Comment: 38 pages, 18 figures.
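
    To make the problem statement concrete, the following minimal sketch (an illustration under assumed data, not the paper's construction) sets up empirical risk minimization for a two-layer fully connected ReLU network with two inputs and two outputs and runs plain gradient descent on it; exactly reaching the target training error on such instances is what the abstract shows to be ∃ℝ-complete.

        import numpy as np

        rng = np.random.default_rng(0)

        # Illustrative data set: n points with 2 input features and 2 real-valued outputs.
        X = rng.normal(size=(40, 2))
        Y = rng.normal(size=(40, 2))

        h = 10                                    # hidden neurons (a constant fraction of n in the paper)
        W1 = rng.normal(scale=0.5, size=(2, h)); b1 = np.zeros(h)
        W2 = rng.normal(scale=0.5, size=(h, 2)); b2 = np.zeros(2)

        def forward(X):
            Z = X @ W1 + b1                       # hidden pre-activations
            A = np.maximum(Z, 0.0)                # ReLU
            return Z, A, A @ W2 + b2              # network outputs

        # Plain gradient descent on the average squared error; this only approximates
        # the optimum, whereas deciding exact optimality is the hard problem.
        lr = 1e-2
        for _ in range(5000):
            Z, A, out = forward(X)
            G = 2.0 * (out - Y) / len(X)          # d/d(out) of (1/n) * sum of squared errors
            gW2, gb2 = A.T @ G, G.sum(axis=0)
            GA = (G @ W2.T) * (Z > 0)             # backpropagate through the ReLU
            gW1, gb1 = X.T @ GA, GA.sum(axis=0)
            W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

        print("training error:", np.sum((forward(X)[2] - Y) ** 2) / len(X))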

    The Complexity of Recognizing Geometric Hypergraphs

    As set systems, hypergraphs are omnipresent and have various representations, ranging from Euler and Venn diagrams to contact representations. In a geometric representation of a hypergraph H = (V, E), each vertex v ∈ V is associated with a point p_v ∈ ℝ^d and each hyperedge e ∈ E is associated with a connected set s_e ⊂ ℝ^d such that {p_v | v ∈ V} ∩ s_e = {p_v | v ∈ e} for all e ∈ E. We say that a given hypergraph H is representable by some (infinite) family F of sets in ℝ^d if there exist P ⊂ ℝ^d and S ⊆ F such that (P, S) is a geometric representation of H. For a family F, we define RECOGNITION(F) as the problem of deciding whether a given hypergraph is representable by F. It is known that the RECOGNITION problem is ∃ℝ-hard for halfspaces in ℝ^d. We study the families of translates of balls and ellipsoids in ℝ^d, as well as other convex sets, and show that their RECOGNITION problems are also ∃ℝ-complete. This means that these recognition problems are equivalent to deciding whether a multivariate system of polynomial equations with integer coefficients has a real solution. Comment: Appears in the Proceedings of the 31st International Symposium on Graph Drawing and Network Visualization (GD 2023); 17 pages, 11 figures.
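
    The representation condition is straightforward to check once points and sets are given; the sketch below (an illustration of the definition, not the paper's algorithm; the ∃ℝ-hardness lies in finding a representation, not verifying one) tests whether a point set and equal-radius balls, i.e. translates of a single ball, represent a small hypergraph.

        import numpy as np

        def represents(points, balls, hyperedges):
            """Check {p_v : v in V} ∩ s_e == {p_v : v in e} for every hyperedge e,
            where each set s_e is a Euclidean ball given as (center, radius)."""
            for (center, radius), edge in zip(balls, hyperedges):
                inside = {v for v, p in points.items()
                          if np.linalg.norm(np.asarray(p) - np.asarray(center)) <= radius}
                if inside != set(edge):
                    return False
            return True

        # Toy instance in R^2: hypergraph on {0, 1, 2} with hyperedges {0, 1} and {1, 2}.
        points = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (2.0, 0.0)}
        hyperedges = [{0, 1}, {1, 2}]
        balls = [((0.5, 0.0), 0.6), ((1.5, 0.0), 0.6)]   # translates of one ball (equal radii)
        print(represents(points, balls, hyperedges))     # True: each ball covers exactly its hyperedge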

    Dynamical Mass Estimates of Large-Scale Filaments in Redshift Surveys

    We propose a new method to measure the mass of large-scale filaments in galaxy redshift surveys. The method is based on the fact that the mass per unit length of isothermal filaments depends only on their transverse velocity dispersion. Filaments that lie perpendicular to the line of sight may therefore have their mass per unit length measured from their thickness in redshift space. We present preliminary tests of the method and find that it predicts the mass per unit length of filaments in an N-body simulation to an accuracy of ~35%. Applying the method to a select region of the Perseus-Pisces supercluster yields a mass-to-light ratio M/L_B of around 460h in solar units, to within a factor of two. The method measures the mass-to-light ratio on length scales of up to 50h^(-1) Mpc and could thereby yield new information on the behavior of the dark matter on mass scales well beyond that of clusters of galaxies. Comment: 21 pages, LaTeX with 6 figures included. Submitted to ApJ.
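
    The relation behind the method is not spelled out in the abstract; for a self-gravitating isothermal cylinder the standard result is a mass per unit length of mu = 2 * sigma^2 / G, with sigma the one-dimensional transverse velocity dispersion. The short sketch below evaluates this relation; the dispersion value of 400 km/s is an illustrative assumption, not a number from the paper.

        # Mass per unit length of an isothermal filament, mu = 2 * sigma^2 / G,
        # in solar masses per Mpc. sigma is an illustrative value, not from the paper.
        G_MPC = 4.30091e-9        # gravitational constant in Mpc * (km/s)^2 / Msun
        sigma = 400.0             # transverse 1D velocity dispersion in km/s

        mu = 2.0 * sigma**2 / G_MPC
        print(f"mass per unit length: {mu:.2e} Msun/Mpc")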

    HOP: A New Group-Finding Algorithm for N-body Simulations

    We describe a new method (HOP) for identifying groups of particles in N-body simulations. Having assigned to every particle an estimate of its local density, we associate each particle with the densest of the N_hop particles nearest to it. Repeating this process allows us to trace a path, within the particle set itself, from each particle in the direction of increasing density. The path ends when it reaches a particle that is its own densest neighbor; all particles reaching the same such particle are identified as a group. Combined with an adaptive smoothing kernel for finding the densities, this method is spatially adaptive, coordinate-free, and numerically straightforward. One can proceed to process the output by truncating groups at a particular density contour and combining groups that share a (possibly different) density contour. While the resulting algorithm has several user-chosen parameters, we show that the results are insensitive to most of these, the exception being the outer density cutoff of the groups. Comment: LaTeX, 18 pages, 7 Postscript figures included. ApJ, in press. Source code available from http://www.sns.ias.edu/~eisenste/hop/hop.htm
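
    The hopping step translates almost directly into code. The sketch below is a minimal illustration of that step only: densities are assumed to be precomputed (the paper uses an adaptive smoothing kernel), the KD-tree neighbor search and the choice N_hop = 16 are assumptions for the example, and the density-contour post-processing is omitted.

        import numpy as np
        from scipy.spatial import cKDTree

        def hop_groups(positions, densities, n_hop=16):
            """Assign each particle to a group by repeatedly hopping to the densest
            of its n_hop nearest neighbors until a particle is its own densest neighbor."""
            tree = cKDTree(positions)
            _, neigh = tree.query(positions, k=n_hop)        # neighbor lists include the particle itself
            # One hop: the densest particle among each particle's neighbors.
            hop = neigh[np.arange(len(positions)), np.argmax(densities[neigh], axis=1)]
            # Follow hops until every particle has reached a local density maximum.
            dest = hop.copy()
            while True:
                nxt = hop[dest]
                if np.array_equal(nxt, dest):
                    break
                dest = nxt
            # Particles sharing the same terminal particle form one group.
            _, labels = np.unique(dest, return_inverse=True)
            return labels

        # Toy usage with random positions and densities (illustrative only).
        rng = np.random.default_rng(1)
        pos = rng.random((1000, 3))
        dens = rng.random(1000)
        print(np.bincount(hop_groups(pos, dens)))            # group sizes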

    The Identity of Information: How Deterministic Dependencies Constrain Information Synergy and Redundancy

    Understanding how different information sources together transmit information is crucial in many domains. For example, understanding the neural code requires characterizing how different neurons contribute unique, redundant, or synergistic pieces of information about sensory or behavioral variables. Williams and Beer (2010) proposed a partial information decomposition (PID) that separates the mutual information that a set of sources contains about a set of targets into nonnegative terms interpretable as these pieces. Quantifying redundancy requires assigning an identity to different information pieces, to assess when information is common across sources. Harder et al. (2013) proposed an identity axiom that imposes necessary conditions to quantify qualitatively common information. However, Bertschinger et al. (2012) showed that, in a counterexample with deterministic target-source dependencies, the identity axiom is incompatible with ensuring PID nonnegativity. Here, we systematically study the consequences of information identity criteria that assign identity based on associations between target and source variables resulting from deterministic dependencies. We show how these criteria are related to the identity axiom and to previously proposed redundancy measures, and we characterize how they lead to negative PID terms. This constitutes a further step toward more explicitly addressing the role of information identity in the quantification of redundancy. The implications for studying neural coding are discussed.
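
    To make the quantities concrete, here is a small hedged sketch of the Williams-Beer redundancy measure I_min for two sources and one target, evaluated on the XOR distribution, where the joint mutual information is entirely synergistic; the specific-information formula follows Williams and Beer (2010), and the example distribution is an illustration, not one of the counterexamples discussed in the text.

        import itertools
        import numpy as np

        # Joint distribution p(s1, s2, t) for the XOR example: s1, s2 uniform, t = s1 XOR s2.
        p = np.zeros((2, 2, 2))
        for s1, s2 in itertools.product((0, 1), repeat=2):
            p[s1, s2, s1 ^ s2] = 0.25

        def specific_info(p_joint, t, source_axis):
            """Specific information I(T = t; S) for one source, in bits:
            sum_s p(s | t) * log2( p(t | s) / p(t) )."""
            other = 1 - source_axis
            p_st = p_joint.sum(axis=other)          # p(s, t) for the chosen source
            p_t = p_st.sum(axis=0)[t]
            p_s = p_st.sum(axis=1)
            total = 0.0
            for s in range(p_st.shape[0]):
                if p_st[s, t] > 0:
                    p_s_given_t = p_st[s, t] / p_t
                    p_t_given_s = p_st[s, t] / p_s[s]
                    total += p_s_given_t * np.log2(p_t_given_s / p_t)
            return total

        # Williams-Beer redundancy: I_min = sum_t p(t) * min over sources of I(T = t; S_i).
        p_t = p.sum(axis=(0, 1))
        i_min = sum(p_t[t] * min(specific_info(p, t, 0), specific_info(p, t, 1))
                    for t in range(2))
        print("I_min redundancy (bits):", i_min)    # 0 for XOR: the 1 bit of joint information is synergy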

    Free streaming in mixed dark matter

    Free streaming in a mixture of collisionless non-relativistic dark matter (DM) particles is studied by implementing methods from the theory of multicomponent plasmas. The mixture includes Fermionic, condensed and non-condensed Bosonic particles that decouple in equilibrium while relativistic, heavy non-relativistic thermal relics (WIMPs), and sterile neutrinos that decouple out of equilibrium while relativistic. The free-streaming length λ_fs is obtained from the marginal zero of the gravitational polarization function, which separates short-wavelength Landau-damped from long-wavelength Jeans-unstable collective modes. At redshift z we find 1/λ_fs²(z) = [1/(1+z)] [0.071/kpc]² Σ_a ν_a g_{d,a}^{2/3} (m_a/keV)² I_a, where 0 ≤ ν_a ≤ 1 are the fractions of the respective DM components of mass m_a that decouple when the effective number of ultrarelativistic degrees of freedom is g_{d,a}, and the I_a depend only on the distribution functions at decoupling, given explicitly in all cases. If sterile neutrinos produced either resonantly or non-resonantly that decouple near the QCD scale are the only DM component, we find λ_fs(0) ≃ 7 kpc (keV/m) (non-resonant) and λ_fs(0) ≃ 1.73 kpc (keV/m) (resonant). If WIMPs with m_wimp ≳ 100 GeV decoupling at T_d ≳ 10 MeV are present in the mixture with ν_wimp ≫ 10⁻¹², then λ_fs(0) ≲ 6.5 × 10⁻³ pc is dominated by CDM. If a Bose-Einstein condensate is a DM component, its free-streaming length is consistent with CDM because of the infrared enhancement of the distribution function. Comment: 19 pages, 2 figures. More discussions, same conclusions and results. Version to appear in Phys. Rev.
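
    As a numerical companion, the snippet below simply evaluates the free-streaming formula quoted in the abstract for a given set of components; the per-component inputs (fraction, g_d, mass, and especially I_a) are placeholder values for illustration only, since the abstract does not list the I_a explicitly.

        import numpy as np

        def inv_lambda_fs_sq(z, nu, g_d, m_keV, I):
            """Evaluate the abstract's formula
            1/lambda_fs^2(z) = [1/(1+z)] * (0.071/kpc)^2 * sum_a nu_a * g_{d,a}^(2/3) * (m_a/keV)^2 * I_a,
            returning 1/lambda_fs^2 in kpc^-2."""
            nu, g_d, m_keV, I = map(np.asarray, (nu, g_d, m_keV, I))
            return (0.071**2 / (1.0 + z)) * np.sum(nu * g_d**(2.0 / 3.0) * m_keV**2 * I)

        # Illustrative single-component input (placeholder numbers, not values from the paper):
        inv_sq = inv_lambda_fs_sq(z=0.0, nu=[1.0], g_d=[30.0], m_keV=[2.0], I=[1.0])
        print(f"lambda_fs(0) = {1.0 / np.sqrt(inv_sq):.2f} kpc")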

    Constraining the expansion history of the universe from the redshift evolution of cosmic shear

    We present a quantitative analysis of the constraints on the total equation of state parameter that can be obtained from measuring the redshift evolution of the cosmic shear. We compare the constraints that can be obtained from measurements of the spin-two angular multipole moments of the cosmic shear to those resulting from the two-dimensional and three-dimensional power spectra of the cosmic shear. We find that if the multipole moments of the cosmic shear are measured accurately enough for a few redshifts, the constraints on the dark energy equation of state parameter improve significantly compared to those that can be obtained from other measurements. Comment: 17 pages, 4 figures.